injective flow
Simple, Fast and Efficient Injective Manifold Density Estimation with Random Projections
Amin, Ahmad Ayaz; Kazi, Baha Uddin
We introduce Random Projection Flows (RPFs), a principled framework for injective normalizing flows that leverages tools from random matrix theory and the geometry of random projections. RPFs employ random semi-orthogonal matrices, drawn from Haar-distributed orthogonal ensembles via QR decomposition of Gaussian matrices, to project data into lower-dimensional latent spaces for the base distribution. Unlike PCA-based flows or learned injective maps, RPFs are plug-and-play, efficient, and yield closed-form expressions for the Riemannian volume correction term. We demonstrate that RPFs are both theoretically grounded and practically effective, providing a strong baseline for generative modeling and a bridge between random projection theory and normalizing flows.
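The core construction described above — a Haar-distributed semi-orthogonal matrix obtained via QR decomposition of a Gaussian matrix, whose semi-orthogonality makes the Riemannian volume correction vanish in closed form — can be sketched numerically. This is an illustrative sketch, not the paper's implementation; the dimensions `D`, `d` and the seed are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
D, d = 8, 3  # illustrative ambient and latent dimensions

# Haar-distributed semi-orthogonal matrix via QR of a Gaussian matrix.
G = rng.standard_normal((D, d))
Q, R = np.linalg.qr(G)           # Q: D x d with orthonormal columns
Q = Q * np.sign(np.diag(R))      # sign fix so the distribution is exactly Haar

W = Q.T                          # d x D projection with orthonormal rows

# Riemannian volume correction for the injective map z -> W^T z is
# 0.5 * logdet(J^T J) with J = W^T. Since W W^T = I_d, it vanishes.
gram = W @ W.T
log_vol = 0.5 * np.linalg.slogdet(gram)[1]
print(np.allclose(gram, np.eye(d)), abs(log_vol) < 1e-10)
```

Because the Gram matrix is exactly the identity, the volume term contributes nothing to the log-likelihood, which is what makes the correction "closed-form" and the framework plug-and-play.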
Injective Flows for parametric hypersurfaces
Negri, Marcello Massimo; Aellen, Jonathan; Roth, Volker
Normalizing Flows (NFs) are powerful and efficient models for density estimation. When modeling densities on manifolds, NFs can be generalized to injective flows, but the Jacobian determinant becomes computationally prohibitive. Current approaches either consider bounds on the log-likelihood or rely on approximations of the Jacobian determinant. In contrast, we propose injective flows for parametric hypersurfaces and show that for such manifolds we can compute the Jacobian determinant exactly and efficiently, at the same cost as NFs. Furthermore, we show that for the subclass of star-like manifolds we can extend the proposed framework to always allow for a Cartesian representation of the density. We showcase the relevance of modeling densities on hypersurfaces in two settings. Firstly, we introduce a novel Objective Bayesian approach to penalized likelihood models by interpreting level-sets of the penalty as star-like manifolds. Secondly, we consider Bayesian mixture models and introduce a general method for variational inference by defining the posterior of mixture weights on the probability simplex.
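For a star-like manifold, the hypersurface can be written as a radius function times a direction, and the volume element of the embedding has a simple exact form. A minimal sketch, assuming a toy star-like curve in 2D (the radius function `r` and the evaluation point are arbitrary illustrative choices, not from the paper):

```python
import numpy as np

# Star-like curve in R^2: f(t) = r(t) * (cos t, sin t), a 1-dimensional
# hypersurface; r is a toy radius function chosen for illustration.
def r(t):  return 2.0 + 0.5 * np.cos(3 * t)
def dr(t): return -1.5 * np.sin(3 * t)

def embed(t):
    return np.array([r(t) * np.cos(t), r(t) * np.sin(t)])

def log_vol_analytic(t):
    # For f(t) = r(t) u(t) with |u(t)| = 1 and u'(t) orthogonal to u(t),
    # sqrt(det(J^T J)) = sqrt(r^2 + r'^2) -- exact, no approximation.
    return 0.5 * np.log(r(t) ** 2 + dr(t) ** 2)

# Sanity check against a central finite-difference Jacobian.
t0, h = 0.7, 1e-6
J = (embed(t0 + h) - embed(t0 - h)) / (2 * h)   # Jacobian column, shape (2,)
log_vol_fd = 0.5 * np.log(J @ J)
print(abs(log_vol_fd - log_vol_analytic(t0)) < 1e-6)
```

The point of the comparison is that the exact volume term costs no more than evaluating the radius function and its derivative, mirroring the paper's claim of NF-level cost.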
Lifting Architectural Constraints of Injective Flows
Sorrenson, Peter; Draxler, Felix; Rousselot, Armand; Hummerich, Sander; Zimmermann, Lea; Köthe, Ullrich
Generative modeling is one of the most important tasks in machine learning, having numerous applications across vision (Rombach et al., 2022), language modeling (Brown et al., 2020), science (Ardizzone et al., 2018; Radev et al., 2020) and beyond. One of the best-motivated approaches to generative modeling is maximum likelihood training, due to its favorable statistical properties (Hastie et al., 2009). In the continuous setting, exact maximum likelihood training is most commonly achieved by normalizing flows (Rezende & Mohamed, 2015; Dinh et al., 2014; Kobyzev et al., 2020) which parameterize an exactly invertible function with a tractable change of variables (log-determinant term). This generally introduces a trade-off between model expressivity and computational cost, where the cheapest networks to train and sample from, such as coupling block architectures, require very specifically constructed functions which may limit expressivity (Draxler et al., 2022). In addition, normalizing flows preserve the dimensionality of the inputs, requiring a latent space of the same dimension as the data space.
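The coupling block architecture mentioned above gets its tractable change of variables from a triangular Jacobian: one half of the input conditions an affine transform of the other half, so the log-determinant is just a sum of log scales. A minimal NumPy sketch (the toy conditioner stands in for a neural network and is an assumption for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def coupling_forward(x, conditioner):
    """One affine coupling block: the first half of x conditions an
    affine map of the second half; the Jacobian is triangular, so the
    log-determinant is simply the sum of the log scales."""
    d = x.shape[-1] // 2
    x1, x2 = x[..., :d], x[..., d:]
    s, t = conditioner(x1)
    y2 = x2 * np.exp(s) + t
    log_det = s.sum(axis=-1)
    return np.concatenate([x1, y2], axis=-1), log_det

def coupling_inverse(y, conditioner):
    d = y.shape[-1] // 2
    y1, y2 = y[..., :d], y[..., d:]
    s, t = conditioner(y1)
    x2 = (y2 - t) * np.exp(-s)
    return np.concatenate([y1, x2], axis=-1)

# Toy conditioner standing in for a learned network.
A, B = rng.standard_normal((2, 2)), rng.standard_normal((2, 2))
conditioner = lambda h: (np.tanh(h @ A), h @ B)

x = rng.standard_normal(4)
y, log_det = coupling_forward(x, conditioner)
x_rec = coupling_inverse(y, conditioner)
print(np.allclose(x, x_rec))
```

Exact invertibility holds regardless of how expressive the conditioner is, which is precisely the trade-off the paragraph describes: cheap inversion and log-determinants, at the price of a very specifically constructed function.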
Tractable Density Estimation on Learned Manifolds with Conformal Embedding Flows
Ross, Brendan Leigh; Cresswell, Jesse C.
Normalizing flows are generative models that provide tractable density estimation by transforming a simple base distribution into a complex target distribution. However, this technique cannot directly model data supported on an unknown low-dimensional manifold, a common occurrence in real-world domains such as image data. Recent attempts to remedy this limitation have introduced geometric complications that defeat a central benefit of normalizing flows: exact density estimation. We recover this benefit with Conformal Embedding Flows, a framework for designing flows that learn manifolds with tractable densities. We argue that composing a standard flow with a trainable conformal embedding is the most natural way to model manifold-supported data. To this end, we present a series of conformal building blocks and apply them in experiments with real-world and synthetic data to demonstrate that flows can model manifold-supported distributions without sacrificing tractable likelihoods.
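The tractability argument rests on a property of conformal embeddings: their Jacobian satisfies J^T J = s^2 I, so the injective change-of-variables term reduces to d·log(s) with no determinant computation. A hedged sketch of one simple conformal building block (zero-padding, a random rotation, and a global scale; the specific composition and sizes are illustrative assumptions, not the paper's blocks):

```python
import numpy as np

rng = np.random.default_rng(0)
D, d = 5, 2
s = 1.7  # illustrative global scale factor

# A simple conformal embedding g(z) = s * Q * pad(z): zero-pad, rotate, scale.
Q, _ = np.linalg.qr(rng.standard_normal((D, D)))  # random orthogonal matrix
P = np.eye(D)[:, :d]                               # zero-padding, D x d

def g(z):
    return s * Q @ (P @ z)

# Jacobian J = s Q P, so J^T J = s^2 I_d and the density correction is
# the closed form d * log(s) -- no determinant needs to be computed.
J = s * Q @ P
gram = J.T @ J
log_vol = 0.5 * np.linalg.slogdet(gram)[1]
print(np.allclose(gram, s**2 * np.eye(d)), np.isclose(log_vol, d * np.log(s)))
```

Composing such blocks keeps the Gram matrix a scalar multiple of the identity, which is why the framework retains exact density estimation on the learned manifold.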
Normalizing Flows Across Dimensions
Cunningham, Edmond; Zabounidis, Renos; Agrawal, Abhinav; Fiterau, Ina; Sheldon, Daniel
Real-world data with underlying structure, such as pictures of faces, are hypothesized to lie on a low-dimensional manifold. This manifold hypothesis has motivated state-of-the-art generative algorithms that learn low-dimensional data representations. Unfortunately, a popular generative model, normalizing flows, cannot take advantage of this. Normalizing flows are based on successive variable transformations that are, by design, incapable of learning lower-dimensional representations. In this paper we introduce noisy injective flows (NIF), a generalization of normalizing flows that can go across dimensions. NIF explicitly map the latent space to a learnable manifold in a high-dimensional data space using injective transformations. We further employ an additive noise model to account for deviations from the manifold and identify a stochastic inverse of the generative process. Empirically, we demonstrate that a simple application of our method to existing flow architectures can significantly improve sample quality and yield separable data embeddings.
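The additive-noise construction above can be illustrated with a linear toy model: an injective map standing in for the learned manifold embedding, Gaussian observation noise for deviations from the manifold, and a Gaussian posterior as the stochastic inverse. This is a sketch under those simplifying assumptions (linear map, Gaussian prior), not the NIF architecture itself:

```python
import numpy as np

rng = np.random.default_rng(0)
D, d, sigma = 6, 2, 0.01  # illustrative sizes and noise scale

# Toy noisy injective model: x = A z + eps, with A a linear injective map
# standing in for the learned embedding and eps ~ N(0, sigma^2 I).
A = rng.standard_normal((D, d))

def generate(z):
    return A @ z + sigma * rng.standard_normal(D)

def stochastic_inverse_mean(x):
    # Posterior mean of z under prior z ~ N(0, I) and x | z ~ N(Az, sigma^2 I):
    # (I + A^T A / sigma^2)^{-1} A^T x / sigma^2  (standard Gaussian posterior).
    prec = np.eye(d) + A.T @ A / sigma**2
    return np.linalg.solve(prec, A.T @ x / sigma**2)

z = rng.standard_normal(d)
x = generate(z)
z_hat = stochastic_inverse_mean(x)
print(np.linalg.norm(z - z_hat) < 0.2)
```

With small noise, the stochastic inverse concentrates near the true latent, which is the mechanism that lets NIF recover low-dimensional embeddings while still accounting for off-manifold data.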